Recovering the geometry of a human head from a single image, while factorizing materials and illumination, is a severely ill-posed problem that requires prior information to be solved. Methods based on 3D Morphable Models (3DMM), and their combination with differentiable renderers, have shown promising results. However, the expressiveness of 3DMMs is limited, and they typically yield over-smoothed and identity-agnostic 3D shapes restricted to the face region. Highly accurate full-head reconstructions have recently been obtained with neural fields that parameterize the geometry using multilayer perceptrons. The versatility of these representations has also proven effective for disentangling geometry, materials and lighting. However, these methods require tens of input images. In this paper, we introduce SIRA, a method which, from a single image, reconstructs human head avatars with high-fidelity geometry and factorized lights and surface materials. Our key ingredients are two data-driven statistical models based on neural fields that resolve the ambiguities of single-view 3D surface reconstruction and of appearance decomposition. Experiments show that SIRA obtains state-of-the-art results in 3D head reconstruction, while successfully disentangling the global illumination as well as the diffuse and specular albedos. Furthermore, our reconstructions are amenable to physically-based appearance editing and head model relighting.
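To make the "statistical model over neural fields" ingredient concrete, here is a minimal PyTorch sketch of a latent-conditioned neural field: an MLP maps a 3D point plus a per-identity latent code to a signed distance, so a statistical shape model becomes a prior over latent codes. All names and sizes are illustrative assumptions, not the actual SIRA architecture.

```python
import torch
import torch.nn as nn

class LatentSDF(nn.Module):
    """Coordinate MLP: (3D point, identity code) -> signed distance."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),  # signed distance to the head surface
        )

    def forward(self, xyz, z):
        # xyz: (N, 3) query points; z: (latent_dim,) identity code
        z = z.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, z], dim=-1))

# Fitting a single image would then amount to optimizing z (plus appearance
# parameters) so that a differentiable renderer reproduces the input.
model = LatentSDF()
z = torch.zeros(64, requires_grad=True)  # initialized at the prior mean
sdf = model(torch.randn(1024, 3), z)
```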
Enhancing the resilience of distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent's dynamics are influenced both by the values it receives from potentially malicious neighboring agents and by its own self-serving target function. We develop a new algorithmic and analytical framework to achieve resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this case we show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we provide expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. Finally, we present numerical results that validate our analytical convergence guarantees even when the malicious agents compose the majority of agents in the network.
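A minimal numpy sketch of the core idea, under stated assumptions: each legitimate agent averages only over neighbors whose running mean of stochastic trust observations (values in [0, 1], with legitimate neighbors above 1/2 in expectation) clears a threshold, then takes a diminishing gradient step on its own objective. The threshold, step size and trust model are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def resilient_step(x, neighbors, trust_history, grad_f, t, alpha0=1.0):
    """One update for a single agent holding value x."""
    # Classify neighbors by accumulated trust observations.
    trusted = [j for j, hist in enumerate(trust_history) if np.mean(hist) > 0.5]
    vals = [neighbors[j] for j in trusted] + [x]
    consensus = np.mean(vals, axis=0)              # average over trusted set
    return consensus - (alpha0 / (t + 1)) * grad_f(x)  # diminishing step size

# Example: local objective f_i(x) = ||x - target||^2.
grad_f = lambda x: 2 * (x - np.array([1.0, -1.0]))
x = np.zeros(2)
neighbors = [np.array([0.9, -1.1]), np.array([50.0, 50.0])]  # 2nd is malicious
trust = [[0.8, 0.7, 0.9], [0.2, 0.3, 0.1]]  # stochastic trust observations
for t in range(100):
    x = resilient_step(x, neighbors, trust, grad_f, t)
```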
Video synthesis methods have rapidly improved in recent years, allowing easy creation of synthetic humans. This poses a problem, especially in the era of social media, as synthetic videos of speaking humans can be used to spread misinformation in a convincing manner. Thus, there is a pressing need for accurate and robust deepfake detection methods that can detect forgery techniques not seen during training. In this work, we explore whether this can be done by leveraging a multi-modal, out-of-domain backbone trained in a self-supervised manner and adapted to the video deepfake domain. We propose FakeOut: a novel approach that relies on multi-modal data throughout both the pre-training phase and the adaptation phase. We demonstrate the efficacy and robustness of FakeOut in detecting various types of deepfakes, especially manipulations which were not seen during training. Our method achieves state-of-the-art results in cross-manipulation and cross-dataset generalization. This study shows that, perhaps surprisingly, training on out-of-domain videos (i.e., videos with no speaking humans) can lead to better deepfake detection systems. Code is available on GitHub.
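A hedged sketch of the adaptation phase described above: take a multi-modal backbone pre-trained out of domain in a self-supervised manner, and fine-tune it with a small head for real-vs-fake classification on video and audio streams. `ToyBackbone` below is a stand-in so the sketch runs; FakeOut's actual backbone, input shapes and heads are not specified here.

```python
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    """Stand-in for a pre-trained multi-modal (video + audio) encoder."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.video_proj = nn.Linear(8 * 3 * 32 * 32, feat_dim // 2)
        self.audio_proj = nn.Linear(128, feat_dim // 2)

    def forward(self, video, audio):          # video: (B, T, C, H, W)
        return torch.cat([self.video_proj(video.flatten(1)),
                          self.audio_proj(audio)], dim=-1)

class DeepfakeClassifier(nn.Module):
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone              # initialized from pre-training
        self.head = nn.Linear(feat_dim, 2)    # real vs. fake

    def forward(self, video, audio):
        return self.head(self.backbone(video, audio))

model = DeepfakeClassifier(ToyBackbone())
logits = model(torch.randn(4, 8, 3, 32, 32), torch.randn(4, 128))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (4,)))
```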
Effective conservation of maritime environments and wildlife management of endangered species require the implementation of efficient, accurate and scalable solutions for environmental monitoring. Ecoacoustics offers the advantages of non-invasive, long-duration sampling of environmental sounds and has the potential to become the reference tool for biodiversity surveying. However, the analysis and interpretation of acoustic data is a time-consuming process that often requires a great amount of human supervision. This issue might be tackled by exploiting modern techniques for automatic audio signal analysis, which have recently achieved impressive performance thanks to the advances in deep learning research. In this paper we show that convolutional neural networks can indeed significantly outperform traditional automatic methods in a challenging detection task: identification of dolphin whistles from underwater audio recordings. The proposed system can detect signals even in the presence of ambient noise, at the same time consistently reducing the likelihood of producing false positives and false negatives. Our results further support the adoption of artificial intelligence technology to improve the automatic monitoring of marine ecosystems.
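The detection pipeline such results typically rest on is simple to sketch: convert a window of the underwater recording into a spectrogram and classify it with a small CNN. The architecture and window sizes below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class WhistleCNN(nn.Module):
    """Binary classifier over fixed-length spectrogram windows."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # whistle present / absent
        )

    def forward(self, spectrogram):      # (B, 1, n_mels, n_frames)
        return self.classifier(self.features(spectrogram))

# E.g., 64 mel bands x 128 time frames per analysis window:
logits = WhistleCNN()(torch.randn(8, 1, 64, 128))
```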
We derive a learning framework to generate routing/pickup policies for a fleet of vehicles tasked with servicing stochastically appearing requests on a city map. We focus on policies that 1) give rise to coordination amongst the vehicles, thereby reducing wait times for servicing requests, 2) are non-myopic, considering a priori unknown potential future requests, and 3) can adapt to changes in the underlying demand distribution. Specifically, we are interested in adapting to fluctuations of actual demand conditions in urban environments, such as on-peak vs. off-peak hours. We achieve this through a combination of (i) online play, a lookahead optimization method that improves the performance of rollout methods via an approximate policy iteration step, and (ii) an offline approximation scheme that allows for adapting to changes in the underlying demand model. In particular, we achieve adaptivity of our learned policy to different demand distributions by quantifying a region of validity using the q-valid radius of a Wasserstein Ambiguity Set. We propose a mechanism for switching the originally trained offline approximation when the current demand is outside the original validity region. In this case, we propose to use an offline architecture, trained on a historical demand model that is closer to the current demand in terms of Wasserstein distance. We learn routing and pickup policies over real taxicab requests in downtown San Francisco with high variability between on-peak and off-peak hours, demonstrating the ability of our method to adapt to real fluctuations in demand distributions. Our numerical results demonstrate that our method outperforms rollout-based reinforcement learning, as well as several benchmarks based on classical methods from the field of operations research.
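A hedged sketch of the switching mechanism just described: measure the 1-D Wasserstein distance between the current empirical demand and the demand model each offline approximation was trained on; if the active model's distance exceeds its validity radius, switch to the closest historical model. The radius value and the demand encoding (request inter-arrival times) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def select_policy(current_demand, trained_models, active_idx, q_valid_radius):
    """current_demand: samples of, e.g., request inter-arrival times."""
    dists = [wasserstein_distance(current_demand, m["demand_samples"])
             for m in trained_models]
    if dists[active_idx] <= q_valid_radius:
        return active_idx                 # still inside the validity region
    return int(np.argmin(dists))          # closest offline approximation

models = [{"demand_samples": np.random.exponential(1.0, 500)},   # off-peak
          {"demand_samples": np.random.exponential(0.3, 500)}]   # on-peak
current = np.random.exponential(0.35, 200)
active = select_policy(current, models, active_idx=0, q_valid_radius=0.2)
```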
In this study, we propose a lung nodule detection scheme that fully incorporates the clinical workflow of radiologists. In particular, we exploit bi-directional maximum intensity projection (MIP) images of various thicknesses (i.e., 3, 5 and 10mm), along with a 3D patch of the CT scan consisting of 10 adjacent slices, as input to a self-distillation-based Multi-Encoder Network (MEDS-Net). The proposed architecture first condenses the 3D patch input to three channels using a dense block, whose dense units effectively examine nodule presence in the 2D axial slices. This condensed information, along with the forward and backward MIP images, is fed to three different encoders to learn the most meaningful representation, which is forwarded to the decoder block at various levels. At the decoder block, we employ a self-distillation mechanism by attaching a distillation block that contains five lung nodule detectors; this expedites convergence and improves the learning ability of the proposed architecture. Finally, the proposed scheme reduces false positives by complementing the main detector with the auxiliary detectors. The scheme has been rigorously evaluated on 888 scans of the LUNA16 dataset, obtaining a CPM score of 93.6%. The results demonstrate that incorporating bi-directional MIP images enables MEDS-Net to effectively distinguish nodules from their surroundings, helping to achieve sensitivities of 91.5% and 92.8% at false positive rates of 0.25 and 0.5 per scan, respectively.
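A simplified PyTorch skeleton of the layout the abstract describes: a dense-style block condenses the 10-slice CT patch to 3 channels, three encoders process the condensed patch and the forward/backward MIP stacks, and multiple detectors attached after the decoder provide the self-distillation signal. Channel counts and depths are illustrative; this is not the published MEDS-Net definition.

```python
import torch
import torch.nn as nn

def enc():   # tiny stand-in encoder
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

class MEDSNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.condense = nn.Conv2d(10, 3, 1)      # 10 slices -> 3 channels
        self.enc_ct, self.enc_fwd, self.enc_bwd = enc(), enc(), enc()
        self.decoder = nn.Conv2d(96, 32, 3, padding=1)
        # main detector + auxiliary detectors for self-distillation
        self.detectors = nn.ModuleList(nn.Conv2d(32, 1, 1) for _ in range(5))

    def forward(self, ct_patch, mip_fwd, mip_bwd):
        x = torch.cat([self.enc_ct(self.condense(ct_patch)),
                       self.enc_fwd(mip_fwd), self.enc_bwd(mip_bwd)], dim=1)
        feats = torch.relu(self.decoder(x))
        return [torch.sigmoid(d(feats)) for d in self.detectors]

outs = MEDSNetSketch()(torch.randn(1, 10, 64, 64),   # 10 adjacent CT slices
                       torch.randn(1, 3, 64, 64),    # forward MIPs (3/5/10mm)
                       torch.randn(1, 3, 64, 64))    # backward MIPs
```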
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. The framework exploits stochastic trust observations between robots to arrive at tractable, resilient decisions at a centralized Fusion Center (FC), even when i) malicious robots are present in the network and may outnumber the legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is a Two-Stage Approach (2SA) that estimates the legitimacy of robots based on the received trust observations, and provably minimizes the probability of detection error under the worst-case malicious attack; here, the proportion of malicious robots is known but arbitrary. For an unknown proportion of malicious robots, we develop an Adversarial Generalized Likelihood Ratio Test (A-GLRT) that uses both the reported robot measurements and the trust observations to simultaneously estimate the trustworthiness of the robots, their reporting strategies, and the correct hypothesis. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment in which a group of robots crowdsenses traffic conditions on a mock-up road network while subject to a Sybil attack. We extract the trust observations for each robot from real communication signals, which provide statistical information on the uniqueness of the sender. We show that even when malicious robots form the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT, respectively.
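A minimal numpy sketch of the two-stage idea under stated assumptions: stage 1 averages each robot's trust observations and keeps robots above a threshold; stage 2 runs a likelihood ratio test on the kept robots' one-shot measurements, here modeled as Gaussian observations of a binary state. The thresholds and noise model are illustrative, not the paper's exact 2SA.

```python
import numpy as np

def two_stage_decision(measurements, trust_obs, sigma=1.0, tau=0.5):
    # Stage 1: legitimacy from accumulated trust observations.
    legit = np.mean(trust_obs, axis=1) > tau
    y = measurements[legit]
    # Stage 2: log-likelihood ratio test, H1: state = 1 vs. H0: state = 0,
    # with additive N(0, sigma^2) measurement noise.
    log_lr = np.sum((y ** 2 - (y - 1.0) ** 2) / (2 * sigma ** 2))
    return int(log_lr > 0)   # 1 -> decide H1, 0 -> decide H0

rng = np.random.default_rng(0)
# 4 legitimate robots observe state 1; 6 malicious robots report around 0.
meas = np.concatenate([1 + rng.normal(0, 1, 4), rng.normal(0, 1, 6)])
trust = np.vstack([rng.uniform(0.5, 1.0, (4, 20)),    # legit: trust > 1/2
                   rng.uniform(0.0, 0.5, (6, 20))])   # malicious: trust < 1/2
print(two_stage_decision(meas, trust))   # typically decides H1
```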
For industrial-scale advertising systems, prediction of ad click-through rate (CTR) is a central problem. Ad clicks constitute a significant class of user engagement and are often used as the primary signal of an ad's usefulness to users. Additionally, in cost-per-click advertising systems, the expected click rate feeds directly into value estimation. CTR model development is therefore a significant investment for most Internet advertising companies. Engineering for such problems requires many machine learning (ML) techniques suited to online learning that go well beyond traditional accuracy improvements, especially concerning efficiency, reproducibility, calibration, and credit attribution. We present a case study of practical techniques deployed in Google's search ads CTR model. This paper provides an industry case study that highlights important areas of current ML research and illustrates how impactful new ML methods are evaluated and made useful in a large-scale industrial setting.
In this work, we introduce an incremental learning framework for click-through rate (CTR) prediction and demonstrate its effectiveness on Taboola's massive-scale recommendation service. Our approach rapidly captures emerging trends by warm-starting from previously deployed models and fine-tuning on "fresh" data only. Past knowledge is maintained via a teacher-student paradigm, in which the teacher acts as a distillation mechanism that mitigates the catastrophic forgetting phenomenon. Our incremental learning framework enables significantly faster training and deployment cycles (a x12 speedup). We demonstrate a consistent lift in Revenue Per Mille (RPM) across multiple traffic segments and a significant CTR increase on newly introduced items.
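A hedged PyTorch sketch of the warm-start + distillation recipe just described: the new (student) model is initialized from the previously deployed model and fine-tuned on fresh data with a combined loss of fresh-label cross-entropy and a distillation term toward the frozen teacher's predictions. The mixing weight `lam` and the toy model are assumptions for illustration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
teacher.eval()                              # previously deployed model, frozen
student = copy.deepcopy(teacher)            # warm start from the teacher
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
lam = 0.3                                   # distillation weight (assumed)

x = torch.randn(256, 20)                    # "fresh" data only
y = torch.randint(0, 2, (256, 1)).float()   # fresh click labels
for _ in range(10):
    with torch.no_grad():
        p_teacher = torch.sigmoid(teacher(x))   # soft targets
    logits = student(x)
    loss = (F.binary_cross_entropy_with_logits(logits, y)
            + lam * F.binary_cross_entropy_with_logits(logits, p_teacher))
    opt.zero_grad()
    loss.backward()
    opt.step()
```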
We study the problem of graph structure identification, i.e., recovering the graph of dependencies among time series. We model the time series data as components of the state of a linear stochastic networked dynamical system. We assume partial observability, where the state evolution of only a subset of the nodes comprising the network is observed. We devise a new feature vector computed from the observed time series and prove that these features are linearly separable, i.e., there exists a hyperplane that separates the cluster of features associated with connected pairs of nodes from the cluster associated with disconnected pairs. This makes it possible to train a variety of classifiers on these features to perform causal inference. In particular, we use the features to train convolutional neural networks (CNNs). The resulting causal inference mechanism outperforms the state of the art w.r.t. sample complexity. The trained CNNs generalize well over structurally distinct networks (dense or sparse) and noise-level profiles. Remarkably, they also generalize well to real-world networks when trained on synthetic networks (realizations of random graphs). Finally, the proposed method consistently reconstructs the graph in a pairwise manner, that is, by determining, for each pair of nodes, whether an edge or arrow is present or absent from the corresponding pair of time series. This fits the framework of large-scale systems, where observing or processing all nodes in the network is prohibitive.
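A minimal sketch of the pairwise recipe under stated assumptions: for each pair of observed nodes, build a feature vector from lagged empirical cross-correlations of their time series, then train an off-the-shelf classifier (logistic regression here, as a stand-in for the paper's CNN) to decide edge vs. no edge. The exact feature construction in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(xi, xj, max_lag=5):
    """Lagged cross-correlations E[x_i(t) x_j(t+k)] for k = 0..max_lag."""
    xi = xi - xi.mean()
    xj = xj - xj.mean()
    T = len(xi)
    return np.array([np.dot(xi[:T - k], xj[k:]) / (T - k)
                     for k in range(max_lag + 1)])

# Toy system: x_j is driven by x_i (edge i -> j); x_k is independent.
rng = np.random.default_rng(0)
T = 2000
xi = rng.normal(size=T)
xj = 0.6 * np.roll(xi, 1) + 0.4 * rng.normal(size=T)
xk = rng.normal(size=T)
X = np.vstack([pair_features(xi, xj), pair_features(xi, xk)])
y = np.array([1, 0])                       # connected vs. disconnected pair
clf = LogisticRegression().fit(X, y)       # the features are (near) separable
```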